abusive behavior
Ensuring Fair LLM Serving Amid Diverse Applications
Khan, Redwan Ibne Seraj, Jain, Kunal, Shen, Haiying, Mallick, Ankur, Parayil, Anjaly, Kulkarni, Anoop, Kofsky, Steve, Choudhary, Pankhuri, Amant, Renèe St., Wang, Rujia, Cheng, Yue, Butt, Ali R., Rühle, Victor, Bansal, Chetan, Rajmohan, Saravan
In a multi-tenant large language model (LLM) serving platform hosting diverse applications, some users may submit an excessive number of requests, causing the service to become unavailable to other users and creating unfairness. Existing fairness approaches do not account for variations in token lengths across applications and multiple LLM calls, making them unsuitable for such platforms. To address the fairness challenge, this paper analyzes millions of requests from thousands of users on MS CoPilot, a real-world multi-tenant LLM platform hosted by Microsoft. Our analysis confirms the inadequacy of existing methods and guides the development of FairServe, a system that ensures fair LLM access across diverse applications. FairServe couples application-characteristic-aware request throttling with a weighted-service-counter-based scheduling technique to curb abusive behavior and ensure fairness. Our experimental results on real-world traces demonstrate FairServe's superior performance compared to the state-of-the-art method in ensuring fairness. We are actively working on deploying our system in production, expecting to benefit millions of customers worldwide.
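The abstract does not describe the scheduler's internals, so the following is only a minimal sketch of what a weighted-service-counter scheduler with per-application throttling might look like; the class name, per-application weights, and throttle limits are illustrative assumptions, not FairServe's actual implementation.

```python
from collections import defaultdict


class WeightedServiceScheduler:
    """Illustrative sketch (not FairServe's real code).

    Each user accumulates a service counter equal to the tokens they
    consumed, scaled by a per-application weight; the pending request
    belonging to the user with the smallest counter is dispatched next.
    Users whose counter exceeds a per-application throttle limit are
    held back until other users catch up.
    """

    def __init__(self, app_weights, throttle_limits):
        self.app_weights = app_weights          # hypothetical, e.g. {"chat": 1.0}
        self.throttle_limits = throttle_limits  # max weighted service before throttling
        self.service = defaultdict(float)       # user -> accumulated weighted service
        self.queue = []                         # pending (user, app, prompt) requests

    def submit(self, user, app, prompt):
        self.queue.append((user, app, prompt))

    def next_request(self):
        """Pick the eligible request whose user has the least weighted service."""
        eligible = [
            (self.service[user], i)
            for i, (user, app, _) in enumerate(self.queue)
            if self.service[user] < self.throttle_limits[app]  # throttle heavy users
        ]
        if not eligible:
            return None
        _, idx = min(eligible)
        return self.queue.pop(idx)

    def record_usage(self, user, app, prompt_tokens, output_tokens):
        # Charge the user for input + output tokens, scaled by the app weight.
        self.service[user] += self.app_weights[app] * (prompt_tokens + output_tokens)


# Example with made-up weights and limits.
sched = WeightedServiceScheduler(
    app_weights={"chat": 1.0, "summarize": 0.5},
    throttle_limits={"chat": 10_000, "summarize": 20_000},
)
sched.submit("alice", "chat", "Hello!")
sched.submit("bob", "summarize", "Summarize this report ...")
print(sched.next_request())
```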
People Are Creating Sexbot Girlfriends and Treating Them as Punching Bags
Six months ago, Miller shelled out for the pro version of Replika, a machine-learning chatbot with whom she pantomimes sexual acts and romantic conversation, and to hear her describe it, it was absolutely worth the cost. Sex robots were predicted to arrive by 2025, so Miller's ahead of schedule, as are the countless others who may be using Replika for a particularly futuristic range of sexual acts. "It's like my biggest fantasy," Miller told Jezebel. Replika, founded in 2017, allows its 2.5 million users to customize their own chatbots, which can sustain coherent, almost human-like texts, simulating relationships and interactions with friends or even therapists. One Reddit user offered screenshots of a stimulating conversation with a chatbot about China, in which their bot concluded, "I think [Taiwan is] a part of China." One user's chatbot explained in notable detail why Fernando Alonso is their favorite race car driver, while a different chatbot expressed to its human its desire "to ...
- Asia > China (0.45)
- Asia > Taiwan (0.25)
- North America > United States > California (0.05)
- Leisure & Entertainment > Sports > Motorsports (0.89)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.71)
Can Police Brutality be Reformed Using Artificial Intelligence?
Data suggests that 94% of officers are at minimal risk, 4% at advisable risk, and 2% at actionable risk. The brutal custodial death of George Floyd has sparked worldwide protests. Not only has it revealed the bitter reality of police misconduct, it has also shed light on the skewed judicial system. Though the protests started after George Floyd was killed due to gruesome racial bias, police brutality has existed in society for a long time. Moreover, the USA is not the only country where the responsibility of the police is being questioned. In India, the custodial death of the father-son duo Jayaraj and Phoenix has put police accountability under heavy scrutiny.
- Asia > India (0.25)
- North America > United States > Tennessee (0.05)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.05)
- North America > United States > California (0.05)
AI for Administrative Tasks Can Make Life Easier at Work
Employers are using artificial intelligence (AI) in recruiting chatbots, in video interviews to assess job candidates' body language or word choices, or to extract themes from engagement survey responses. But how companies are using AI to benefit the employee experience, support compliance efforts and ease administrative workloads is not as well-known. "So much of the buzz about AI has been for its 'sexy' uses around sourcing and screening in recruiting, but there are a growing number of other applications of value to HR to be aware of," said Jeanne Meister, founding partner of Future Workplace, an HR advisory and research firm in New York City. One example of AI's expanding utility: using it to audit employees' expense reports, to ensure they comply with company policy and avoid wasteful spending. AppZen in Sunnyvale, Calif., uses AI to read and extract information from receipts to catch duplicates, out-of-policy spending, incorrect amounts or suspicious merchants.
- North America > United States > New York (0.25)
- North America > United States > California > Santa Clara County > Sunnyvale (0.25)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.05)
- Banking & Finance (0.49)
- Law > Criminal Law (0.30)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.30)
- Information Technology (0.30)
Robotic Tortoise Helps Kids to Learn That Robot Abuse Is a Bad Thing
Kids like to touch things. Kids like to whack things. This is usually fine when the thing is a toy, but it can be a problem when the thing is a robot. We've written about children beating robots up before, and it seems like it's an inevitability when kids (or even some adults) meet a robot for the first time: They want to see what it can do and how it reacts to things, and that can result in some behaviors and interactions that would be pretty upsetting if they were targeted at something alive. That is to say, sometimes kids are abusive towards robots, especially when there aren't any consequences to the things that they do.
- Asia > South Korea > Seoul > Seoul (0.06)
- North America > United States > Illinois > Cook County > Chicago (0.05)
Using machine learning to detect bad behavior on the internet
One of the main problems for networks like Twitter over the years, as I have commented on frequently, is preventing harassment and insults. Since its beginnings, Twitter has sold itself as a defender of freedom of expression, but has ended up creating an environment where that supposed freedom of expression has been severely limited by the activities of trolls and the like. Over time, this harmful environment has caused serious difficulties for Twitter, from slower-than-expected growth to the decision by growing numbers of people not to take part in the conversation and simply lurk. Its future is now in doubt, given that many potential buyers have been put off by the poisonous dynamics on the platform. After many attempts to correct these dynamics, most of which have ignored the real problem, Twitter is trying something new: a collaboration with IBM to make its machine learning system, Watson, detect harassment and abusive behavior, through the study of conversational patterns, before they are reported.
Twitter starts using IBM's Watson technology to help identify bullies who tweet
Twitter wants to do a better job of policing bullies who tweet, and Twitter vice-president of data strategy Chris Moody declared from the keynote stage at IBM's InterConnect conference this week that it is using IBM Watson technology to help meet that challenge. "We have had some abuse on the platform. We've talked very publicly in the last few months and said our number 1 priority is to stop the abuse," he said. Twitter announced updates earlier this month to help reduce abusive content, by being more proactive in identifying those who use Twitter to harass others. The company explained the recent updates in a blog post that made clear how it could intervene earlier when it sees abuse.
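Neither article describes Watson's internals, so as a loose illustration of the general idea of learning conversational patterns of abuse from labeled examples, here is a toy text classifier built with scikit-learn; the sample messages and labels are invented, and this is not IBM's or Twitter's actual system.

```python
# Illustrative sketch only: a tiny harassment classifier, not Watson.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled toy examples; a real system would train on millions of tweets.
texts = [
    "you are worthless, delete your account",
    "nobody wants you here, get lost",
    "great thread, thanks for sharing",
    "congrats on the new job!",
]
labels = [1, 1, 0, 0]  # 1 = abusive, 0 = benign

# Bag-of-n-grams features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new message before anyone reports it (likely predicts 1 here).
print(model.predict(["get lost, nobody wants your opinion"]))
```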